blackbox function
Certifying Out-of-Domain Generalization for Blackbox Functions
Weber, Maurice, Li, Linyi, Wang, Boxin, Zhao, Zhikuan, Li, Bo, Zhang, Ce
Certifying the robustness of model performance under bounded data distribution drifts has recently attracted intensive interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions on the model class and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- these techniques often do not scale to large-scale datasets with modern deep neural networks, or cannot handle non-smooth loss functions such as the 0-1 loss. In this paper, we focus on the problem of certifying distributional robustness for blackbox models and bounded loss functions, and propose a novel certification framework based on the Hellinger distance. Our certification technique scales to ImageNet-scale datasets, complex models, and a diverse set of loss functions. We then focus on one specific application enabled by such scalability and flexibility, i.e., certifying out-of-domain generalization for large neural networks and loss functions such as accuracy and AUC. We experimentally validate our certification method on a number of datasets, ranging from ImageNet, where we provide the first non-vacuous certified out-of-domain generalization, to smaller classification tasks where we are able to compare with the state-of-the-art and show that our method performs considerably better.
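The Hellinger distance at the core of this framework has a simple closed form for discrete distributions: H(P, Q) = (1/√2)·‖√p − √q‖₂, bounded in [0, 1]. A minimal sketch of just this distance (the certification machinery itself is not reproduced here):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions.

    H(P, Q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, in [0, 1].
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Identical distributions have distance 0; disjoint supports give 1.
d_same = hellinger([0.5, 0.5], [0.5, 0.5])      # 0.0
d_disjoint = hellinger([1.0, 0.0], [0.0, 1.0])  # 1.0
```

Because the distance is bounded, a certificate can quantify drift without any assumptions on the model producing the losses, which is what enables the blackbox setting.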
Mean-Variance Analysis in Bayesian Optimization under Uncertainty
Iwazaki, Shogo, Inatsu, Yu, Takeuchi, Ichiro
Decision making in an uncertain environment has been studied in various domains. For example, in financial engineering, mean-variance analysis [1, 2, 3] has been introduced as a framework for making investment decisions that takes into account the tradeoff between the return (mean) and the risk (variance) of the investment. In this paper, we study active learning (AL) in an uncertain environment. In many practical AL problems, there are two types of parameters, called design parameters and environmental parameters. For example, in product design, while the design parameters are fully controllable, the environmental parameters vary depending on the environment in which the product is used. We examine AL problems under such an uncertain environment, where the goal is to efficiently find the optimal design parameters while properly taking into account the uncertainty of the environmental parameters. Concretely, let f(x, w) be a blackbox function indicating the performance of a product, where x ∈ X is the controllable design parameter and w ∈ Ω is the uncontrollable environmental parameter, whose uncertainty is characterized by a probability distribution p(w).
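The quantities being traded off, E_w[f(x, w)] and Var_w[f(x, w)], can be estimated for a fixed design x by Monte Carlo sampling from p(w). A minimal sketch with an illustrative toy blackbox (the function, names, and distribution below are assumptions, not from the paper):

```python
import numpy as np

def mean_variance(f, x, w_samples):
    """Monte Carlo estimates of E_w[f(x, w)] and Var_w[f(x, w)].

    f is the blackbox performance function, x a fixed design parameter,
    and w_samples are draws from the environmental distribution p(w).
    """
    values = np.array([f(x, w) for w in w_samples])
    return values.mean(), values.var()

# Toy blackbox: performance degrades quadratically with the environment.
f = lambda x, w: -(x - w) ** 2

rng = np.random.default_rng(0)
w_samples = rng.normal(loc=0.0, scale=1.0, size=10_000)  # p(w) = N(0, 1)
m, v = mean_variance(f, x=0.0, w_samples=w_samples)
# For x = 0 and w ~ N(0, 1): E[-w^2] = -1 and Var(-w^2) = 2.
```

A mean-variance criterion then compares designs via, e.g., m − β·v for a risk-aversion weight β; the paper's contribution is doing this sample-efficiently rather than by brute-force sampling as above.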
Adaptive Sample-Efficient Blackbox Optimization via ES-active Subspaces
Choromanski, Krzysztof, Pacchiano, Aldo, Parker-Holder, Jack, Tang, Yunhao
We present a new algorithm, ASEBO, for conducting optimization of high-dimensional blackbox functions. ASEBO adapts to the geometry of the function and learns optimal sets of sensing directions, which are used to probe it, on the fly. It addresses the exploration-exploitation trade-off of blackbox optimization, where each single function query is expensive, by continuously learning the bias of the lower-dimensional model used to approximate gradients of smoothings of the function with compressed sensing and contextual bandits methods. To obtain this model, it uses techniques from the emerging theory of active subspaces in the novel ES blackbox optimization context. As a result, ASEBO learns the dynamically changing intrinsic dimensionality of the gradient space and adapts to the hardness of different stages of the optimization without external supervision. Consequently, it leads to more sample-efficient blackbox optimization than state-of-the-art algorithms. We provide rigorous theoretical justification of the effectiveness of our method. We also empirically evaluate it on a set of reinforcement learning policy optimization tasks as well as functions from the recently open-sourced Nevergrad library, demonstrating that it consistently learns optimal inputs with fewer queries to a blackbox function than other methods.
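The estimator ASEBO builds on is the vanilla antithetic ES gradient of the Gaussian smoothing of f, which queries f along random sensing directions. A minimal sketch of that baseline estimator (ASEBO's contribution, learning the sensing directions from an active subspace, is not reproduced here):

```python
import numpy as np

def es_gradient(f, x, sigma=0.1, n_pairs=50, seed=None):
    """Antithetic ES estimate of the gradient of the Gaussian smoothing
    of a blackbox f at x:

        grad ~ (1 / (2 * sigma * n)) * sum_i (f(x + sigma*e_i) - f(x - sigma*e_i)) * e_i

    with isotropic Gaussian sensing directions e_i. Illustrative only;
    ASEBO instead draws the e_i from a learned active subspace.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(x)
    for _ in range(n_pairs):
        e = rng.standard_normal(x.shape[0])
        grad += (f(x + sigma * e) - f(x - sigma * e)) * e
    return grad / (2.0 * sigma * n_pairs)

# Sanity check on a quadratic: the true gradient of -||x||^2 at x is -2x.
f = lambda x: -np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
g = es_gradient(f, x, n_pairs=2000, seed=0)
```

Note the cost: each gradient estimate spends 2·n_pairs queries, which is exactly why adapting the sensing directions to a low-dimensional subspace pays off in high dimensions.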
Bayesian Optimization for Hyperparameter Tuning - Arimo
Bayesian Optimization helped us find a hyperparameter configuration that is better than the one found by Random Search for a neural network on the San Francisco Crimes dataset. Readers who are familiar with Machine Learning may want to skip ahead to Section 3 for details. The code to reproduce the experiments can be found here. Hyperparameter tuning may be one of the trickiest, yet most interesting, topics in Machine Learning. For most Machine Learning practitioners, mastering the art of tuning hyperparameters requires not only a solid background in Machine Learning algorithms, but also extensive experience working with real-world datasets.
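The workflow the post describes — fit a surrogate to past evaluations, maximize an acquisition function, evaluate the chosen configuration, repeat — can be sketched end-to-end with a tiny Gaussian-process surrogate and expected improvement on a 1-D toy "hyperparameter". Everything below (the objective, kernel length scale, grid) is illustrative, not from the Arimo experiments:

```python
import numpy as np
from math import erf

def gp_posterior(X, y, Xs, length=0.3, noise=1e-4):
    """Posterior mean and stddev of a zero-mean GP with an RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x*, x*) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI acquisition for maximization."""
    z = (mu - best) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sd * phi

# Toy objective standing in for a validation score; its optimum is at 0.7.
def objective(x):
    return -(x - 0.7) ** 2

grid = np.linspace(0.0, 1.0, 201)   # candidate "hyperparameter" values
X = np.array([0.1, 0.9])            # two initial evaluations
y = objective(X)
for _ in range(15):                 # BO loop: fit, acquire, evaluate
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

best_x = X[np.argmax(y)]
```

The point the post makes carries over even at this scale: the GP concentrates queries near the optimum after a handful of evaluations, whereas random search spends its budget uniformly.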